augmented system


Cascaded Tightly-Coupled Observer Design for Single-Range-Aided Inertial Navigation

Sifour, Oussama, Berkane, Soulaimane, Tayebi, Abdelhamid

arXiv.org Artificial Intelligence

This work introduces a single-range-aided navigation observer that reconstructs the full state of a rigid body using only an Inertial Measurement Unit (IMU), a body-frame vector measurement (e.g., magnetometer), and a distance measurement from a fixed anchor point. The design first formulates an extended linear time-varying (LTV) system to estimate body-frame position, body-frame velocity, and the gravity direction. The recovered gravity direction, combined with the body-frame vector measurement, is then used to reconstruct the full orientation on $\mathrm{SO}(3)$, resulting in a cascaded observer architecture. Almost Global Asymptotic Stability (AGAS) of the cascaded design is established under a uniform observability condition, ensuring robustness to sensor noise and trajectory variations. Simulation studies on three-dimensional trajectories demonstrate accurate estimation of position, velocity, and orientation, highlighting single-range aiding as a lightweight and effective modality for autonomous navigation.
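The cascade rests on expressing the translational kinematics in the body frame, where the gravity direction obeys a linear equation and the squared range yields an output that becomes linear after state augmentation. A hedged sketch of this standard body-frame formulation (notation illustrative; the paper's exact augmented state may differ):

```latex
\begin{aligned}
\dot{p} &= -\omega^{\wedge} p - v, \qquad
\dot{v} = -\omega^{\wedge} v + g + a, \qquad
\dot{g} = -\omega^{\wedge} g,\\
y &= \|p\|, \qquad
\tfrac{d}{dt}\,\tfrac{1}{2}\|p\|^{2} = -p^{\top} v,
\end{aligned}
```

where $p$ is the anchor position expressed in the body frame, $v$ the body-frame velocity, $g$ gravity in the body frame, $\omega$ and $a$ the gyroscope and accelerometer readings, and $\omega^{\wedge}$ the skew-symmetric cross-product matrix. Since $p^{\top}(\omega^{\wedge} p) = 0$, augmenting the state with $\tfrac{1}{2}\|p\|^{2}$ turns the range measurement into a linear output, which is what makes an LTV observer design possible.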


Learning Swarm Interaction Dynamics from Density Evolution

Mavridis, Christos, Tirumalai, Amoolya, Baras, John

arXiv.org Artificial Intelligence

We consider the problem of understanding the coordinated movements of biological or artificial swarms. In this regard, we propose a learning scheme to estimate the coordination laws of the interacting agents from observations of the swarm's density over time. We describe the dynamics of the swarm based on pairwise interactions according to a Cucker-Smale flocking model, and express the swarm's density evolution as the solution to a system of mean-field hydrodynamic equations. We propose a new family of parametric functions to model the pairwise interactions, which allows for the mean-field macroscopic system of integro-differential equations to be efficiently solved as an augmented system of PDEs. Finally, we incorporate the augmented system in an iterative optimization scheme to learn the dynamics of the interacting agents from observations of the swarm's density evolution over time. The results of this work can offer an alternative approach to study how animal flocks coordinate, create new control schemes for large networked systems, and serve as a central part of defense mechanisms against adversarial drone attacks.
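At the microscopic level, the pairwise interactions the paper starts from follow the classic Cucker-Smale form, in which each agent's velocity relaxes toward its neighbors' with a weight that decays in pairwise distance. A minimal sketch of that particle model (parameter names `K`, `sigma`, `beta` follow the common textbook form, not this paper's learned parametric family):

```python
import numpy as np

def cucker_smale_step(x, v, dt=0.05, K=1.0, sigma=1.0, beta=0.5):
    """One explicit-Euler step of the classic Cucker-Smale flocking model."""
    n = x.shape[0]
    dv = np.zeros_like(v)
    for i in range(n):
        d2 = np.sum((x - x[i]) ** 2, axis=1)   # squared distances to agent i
        w = K / (sigma ** 2 + d2) ** beta      # communication weights
        dv[i] = np.sum(w[:, None] * (v - v[i]), axis=0) / n
    return x + dt * v, v + dt * dv

rng = np.random.default_rng(0)
x = rng.normal(size=(20, 2))                   # positions of 20 agents in 2-D
v = rng.normal(size=(20, 2))                   # initial velocities
spread0 = np.var(v, axis=0).sum()
for _ in range(200):
    x, v = cucker_smale_step(x, v)
spread = np.var(v, axis=0).sum()
print(spread0, spread)  # velocity disagreement shrinks as the flock aligns
```

The paper's learning scheme works at the macroscopic level instead, fitting the interaction kernel (the analogue of `w` above) from the density evolution induced by such dynamics.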


Nonadaptive Output Regulation of Second-Order Nonlinear Uncertain Systems

Lu, Maobin, Guay, Martin, Harry, Telema, Wang, Shimin, Cooper, Jordan

arXiv.org Artificial Intelligence

This paper investigates the robust output regulation problem of second-order nonlinear uncertain systems with an unknown exosystem. Instead of the adaptive control approach, this paper resorts to a robust control methodology to solve the problem and thus avoid the bursting phenomenon. In particular, this paper constructs generic internal models for the steady-state state and input variables of the system. By introducing a coordinate transformation, this paper converts the robust output regulation problem into a nonadaptive stabilization problem of an augmented system composed of the second-order nonlinear uncertain system and the generic internal models. Then, we design the stabilization control law and construct a strict Lyapunov function that guarantees the robustness with respect to unmodeled disturbances. The analysis shows that the output zeroing manifold of the augmented system can be made attractive by the proposed nonadaptive control law, which solves the robust output regulation problem. Finally, we demonstrate the effectiveness of the proposed nonadaptive internal model approach by its application to the control of the Duffing system.
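For intuition on the Duffing benchmark, the sketch below tracks a sinusoidal reference using plain feedback linearization with assumed-known coefficients. This is only an illustrative baseline, not the paper's method: the nonadaptive internal-model design handles unknown coefficients and an unknown exosystem, which this sketch does not.

```python
import numpy as np

# Forced Duffing oscillator: x'' + d*x' + a*x + b*x^3 = u.
# Coefficients are hypothetical and treated as known here.
a, b, d = 1.0, 1.0, 0.3
k1, k2 = 4.0, 4.0                 # error dynamics e'' + k2*e' + k1*e = 0
dt, T = 1e-3, 10.0

x, xdot, t = 1.0, 0.0, 0.0
for _ in range(int(T / dt)):
    r, rdot, rddot = np.sin(t), np.cos(t), -np.sin(t)
    e, edot = x - r, xdot - rdot
    # Cancel the nonlinearity and impose stable linear error dynamics.
    u = a * x + b * x**3 + d * xdot + rddot - k1 * e - k2 * edot
    xddot = -d * xdot - a * x - b * x**3 + u
    x, xdot, t = x + dt * xdot, xdot + dt * xddot, t + dt
print(abs(x - np.sin(t)))  # tracking error decays toward zero
```

The internal-model approach achieves the same output-zeroing behavior without cancelling the uncertain terms, which is exactly what makes it robust.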


A Cooperation Control Framework Based on Admittance Control and Time-varying Passive Velocity Field Control for Human--Robot Co-carrying Tasks

Van Trong, Dang, Honji, Sumitaka, Wada, Takahiro

arXiv.org Artificial Intelligence

Human--robot co-carrying tasks demonstrate their potential in both industrial and everyday applications by leveraging the strengths of both parties. Effective control of robots in these tasks requires minimizing position and velocity errors to complete the shared tasks while also managing the energy level within the closed-loop systems to prevent potential dangers such as instability and unintended force exertion. However, this collaboration scenario poses numerous challenges due to varied human intentions in adapting to workspace characteristics, leading to human--robot conflicts and safety incidents. In this paper, we develop a robot controller that enables the robot partner to re-plan its path leveraging conflict information, follow co-carrying motions accurately, ensure passivity, and regulate the energy of the closed-loop system. A cooperation control framework for human--robot co-carrying tasks is constructed by utilizing admittance control and time-varying Passive Velocity Field Control with a fractional exponent energy compensation control term. By measuring the interaction force, the desired trajectory of co-carrying tasks for the robot partner is first generated using admittance control. Thereafter, the new Passive Velocity Field Control with the energy compensation feature is designed to track the desired time-varying trajectory and guarantee passivity. Furthermore, the proposed approach ensures that the system's kinetic energy converges to the desired level within a finite time interval, which is critical for time-critical applications. Numerical simulations demonstrate the effectiveness of the proposed cooperation control method in four collaborative transportation scenarios.
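The admittance stage maps the measured interaction force through a virtual mass-damper-spring model to produce the desired trajectory. A minimal 1-D sketch of that mapping (virtual parameters `M`, `D`, `K` are illustrative, not the paper's values):

```python
def admittance_update(xd, vd, f_ext, dt, M=2.0, D=8.0, K=0.0):
    """One Euler step of a 1-D admittance model M*a + D*v + K*x = f_ext.

    The human's measured force f_ext drives a virtual mass-damper(-spring),
    whose motion becomes the robot's desired trajectory (xd, vd).
    """
    a = (f_ext - D * vd - K * xd) / M
    return xd + dt * vd, vd + dt * a

xd, vd = 0.0, 0.0
for _ in range(1000):              # constant 4 N push for 1 s, dt = 1 ms
    xd, vd = admittance_update(xd, vd, 4.0, 1e-3)
print(xd, vd)  # with K = 0 the velocity settles toward f/D = 0.5 m/s
```

In the full framework this desired trajectory is then passed to the time-varying Passive Velocity Field Controller, which tracks it while keeping the closed loop passive.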


Controller Synthesis from Noisy-Input Noisy-Output Data

Li, Lidong, Bisoffi, Andrea, De Persis, Claudio, Monshizadeh, Nima

arXiv.org Artificial Intelligence

We consider the problem of synthesizing a dynamic output-feedback controller for a linear system, using solely input-output data corrupted by measurement noise. To handle input-output data, an auxiliary representation of the original system is introduced. By exploiting the structure of the auxiliary system, we design a controller that robustly stabilizes all possible systems consistent with data. Notably, we also provide a novel solution to extend the results to generic multi-input multi-output systems. The findings are illustrated by numerical examples.
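Data-driven designs of this kind build on the idea that, with persistently exciting inputs, Hankel matrices of recorded input-output data parameterize every trajectory of the system. A minimal noiseless sketch of that building block (a scalar toy plant; the paper's contribution is handling measurement noise and MIMO systems on top of such representations):

```python
import numpy as np

rng = np.random.default_rng(1)
a, b = 0.5, 1.0                    # hypothetical scalar plant: y+ = a*y + b*u
T, L = 30, 2                       # data length, trajectory depth

u = rng.normal(size=T)             # persistently exciting input
y = np.zeros(T)
for k in range(T - 1):
    y[k + 1] = a * y[k] + b * u[k]

# Depth-L Hankel matrices of the recorded input and output.
Hu = np.column_stack([u[i:i + L] for i in range(T - L + 1)])
Hy = np.column_stack([y[i:i + L] for i in range(T - L + 1)])
H = np.vstack([Hu, Hy])

# Any fresh length-L trajectory of the same plant lies in the column span of H.
uf = np.array([0.2, -0.1])
yf = np.array([0.3, a * 0.3 + b * 0.2])
traj = np.concatenate([uf, yf])
g, *_ = np.linalg.lstsq(H, traj, rcond=None)
residual = np.linalg.norm(H @ g - traj)
print(residual)  # ~0: the data parameterize all system trajectories
```

With measurement noise, exact membership in the column span is lost; the paper's auxiliary representation and robust design stabilize every system consistent with the noisy data instead.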


Model-Free $\delta$-Policy Iteration Based on Damped Newton Method for Nonlinear Continuous-Time H$\infty$ Tracking Control

Wang, Qi

arXiv.org Artificial Intelligence

This paper presents a $\delta$-PI algorithm based on the damped Newton method for the H$\infty$ tracking control problem of unknown continuous-time nonlinear systems. A discounted performance function and an augmented system are used to derive the tracking Hamilton-Jacobi-Isaacs (HJI) equation. The tracking HJI equation is a nonlinear partial differential equation; traditional reinforcement learning methods for solving it are mostly based on the Newton method, which usually guarantees only local convergence and requires a good initial guess. Based on the damped Newton iteration operator equation, a generalized tracking Bellman equation is first derived. The $\delta$-PI algorithm seeks the optimal solution of the tracking HJI equation by iteratively solving the generalized tracking Bellman equation. On-policy and off-policy $\delta$-PI reinforcement learning methods are provided. The off-policy $\delta$-PI algorithm is model-free and can be performed without a priori knowledge of the system dynamics. An NN-based implementation scheme for the off-policy $\delta$-PI algorithm is presented. The suitability of the model-free $\delta$-PI algorithm is illustrated on a nonlinear system simulation.
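The damping idea is easiest to see on a scalar equation. The sketch below applies the damped Newton iteration to a toy quadratic equation standing in for the tracking HJI equation; the $\delta$-PI algorithm lifts this iteration to the function-space setting via the generalized tracking Bellman equation.

```python
def damped_newton(f, df, x0, delta=0.5, iters=100):
    """Damped Newton iteration x <- x - delta * f(x)/f'(x), delta in (0, 1].

    delta = 1 recovers the classic Newton method; smaller delta trades
    quadratic convergence for robustness to a poor initial guess, which is
    the role the damping step plays in the delta-PI algorithm.
    """
    x = x0
    for _ in range(iters):
        x = x - delta * f(x) / df(x)
    return x

# Toy scalar Riccati-type equation p^2 - 2*p - 1 = 0 (illustrative stand-in
# for the tracking HJI equation); the stabilizing root is 1 + sqrt(2).
f = lambda p: p * p - 2 * p - 1
df = lambda p: 2 * p - 2
p = damped_newton(f, df, x0=10.0)
print(p)  # converges to 1 + sqrt(2) ≈ 2.4142
```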


Robust Fully-Asynchronous Methods for Distributed Training over General Architecture

Zhu, Zehan, Tian, Ye, Huang, Yan, Xu, Jinming, He, Shibo

arXiv.org Artificial Intelligence

Perfect synchronization in distributed machine learning problems is inefficient and even impossible due to latency, packet losses, and stragglers. We propose a Robust Fully-Asynchronous Stochastic Gradient Tracking method (R-FAST), where each device performs local computation and communication at its own pace without any form of synchronization. Different from existing asynchronous distributed algorithms, R-FAST can eliminate the impact of data heterogeneity across devices and allow for packet losses by employing a robust gradient tracking strategy that relies on properly designed auxiliary variables for tracking and buffering the overall gradient vector. More importantly, the proposed method utilizes two spanning-tree graphs for communication so long as both share at least one common root, enabling flexible designs in communication architectures. We show that R-FAST converges in expectation to a neighborhood of the optimum with a geometric rate for smooth and strongly convex objectives, and to a stationary point with a sublinear rate for general non-convex settings. Extensive experiments demonstrate that R-FAST runs 1.5-2 times faster than synchronous benchmark algorithms, such as Ring-AllReduce and D-PSGD, while still achieving comparable accuracy, and outperforms existing asynchronous SOTA algorithms, such as AD-PSGD and OSGP, especially in the presence of stragglers.
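R-FAST builds on the gradient tracking template, in which each node maintains an auxiliary variable that tracks the network-wide average gradient, removing the bias caused by data heterogeneity. A minimal synchronous sketch on a ring of quadratic objectives (all values illustrative; R-FAST adds asynchrony, buffering for packet losses, and the two spanning-tree graphs on top of this template):

```python
import numpy as np

# Local objectives f_i(x) = (x - b_i)^2 / 2; the global optimum is mean(b).
n, alpha, iters = 5, 0.05, 600
b = np.arange(n, dtype=float)
W = np.zeros((n, n))                  # doubly stochastic ring weights
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

x = np.zeros(n)
g = x - b                             # tracker initialized at local gradients
for _ in range(iters):
    x_new = W @ x - alpha * g         # consensus step minus tracked gradient
    g = W @ g + (x_new - b) - (x - b) # track the average gradient
    x = x_new
print(x)  # every node approaches mean(b) = 2.0
```

The key invariant is that the trackers always sum to the sum of local gradients, so at consensus each node descends along the true global gradient despite heterogeneous local data.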